
Use Azure OpenAI Models

Use OpenAI models deployed through Azure services in your pipelines.


For a list of supported models, see Azure documentation.

Prerequisites

You need an Azure OpenAI API key and an Azure OpenAI endpoint. For details, see the Azure REST API reference.
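The API key is stored through the integration described below, while the endpoint typically goes into the configuration of each Azure component. This is a minimal, hedged fragment assuming the azure_endpoint init parameter of Haystack's Azure OpenAI components; the endpoint URL and deployment name are placeholders:

document_embedder:
  type: haystack.components.embedders.azure_document_embedder.AzureOpenAIDocumentEmbedder
  init_parameters:
    azure_endpoint: "https://<your-resource>.openai.azure.com" # your Azure OpenAI endpoint
    azure_deployment: "text-embedding-ada-002" # the deployment you created in Azure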

Use Azure OpenAI

First, connect deepset AI Platform to Azure through the Integrations page. You can set up the connection for a single workspace or for the whole organization:

Add Workspace-Level Integration

  1. Click your profile icon and choose Settings.
  2. Go to Workspace>Integrations.
  3. Find the provider you want to connect and click Connect next to it.
  4. Enter the API key and any other required details.
  5. Click Connect. You can use this integration in pipelines and indexes in the current workspace.

Add Organization-Level Integration

  1. Click your profile icon and choose Settings.
  2. Go to Organization>Integrations.
  3. Find the provider you want to connect and click Connect next to it.
  4. Enter the API key and any other required details.
  5. Click Connect. You can use this integration in pipelines and indexes in all workspaces in the current organization.

Then, add a component that uses an OpenAI model hosted on Azure to your pipeline. The components are listed below by the type of model they use:

  • Embedding models:

    • AzureOpenAITextEmbedder: Calculates embeddings for text, such as a query. Often used in query pipelines to embed the query and pass the embedding to an embedding retriever.
    • AzureOpenAIDocumentEmbedder: Calculates embeddings for documents. Often used in indexes to embed documents and pass them to DocumentWriter.

      Embedding Models in Query Pipelines and Indexes

      The embedding model you use to embed documents in your index must be the same as the one you use to embed queries in your query pipeline. For example, if you use CohereDocumentEmbedder to embed your documents, use CohereTextEmbedder with the same model to embed your queries. The sketch after this list shows the same pairing with the Azure embedders.

  • LLMs:

    • AzureOpenAIGenerator: Generates text using OpenAI models hosted on Azure, often used in RAG pipelines.
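To keep the embedders matched when using Azure models, point the document embedder in your index and the text embedder in your query pipeline at the same deployment. A minimal sketch (the deployment name is a placeholder):

# In the index:
document_embedder:
  type: haystack.components.embedders.azure_document_embedder.AzureOpenAIDocumentEmbedder
  init_parameters:
    azure_deployment: "text-embedding-ada-002"

# In the query pipeline:
query_embedder:
  type: haystack.components.embedders.azure_text_embedder.AzureOpenAITextEmbedder
  init_parameters:
    azure_deployment: "text-embedding-ada-002" # same deployment as in the index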

Usage Examples

This is an example of how to use an Azure-hosted embedding model in an index; a sketch of a query pipeline that uses an Azure-hosted text embedder and LLM follows it:

components:
  ...
  splitter:
    type: haystack.components.preprocessors.document_splitter.DocumentSplitter
    init_parameters:
      split_by: word
      split_length: 250
      split_overlap: 30

  document_embedder:
    type: haystack.components.embedders.azure_document_embedder.AzureOpenAIDocumentEmbedder
    init_parameters:
      azure_deployment: "text-embedding-ada-002" # the name of the Azure deployment (model) you want to use

  writer:
    type: haystack.components.writers.document_writer.DocumentWriter
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          embedding_dim: 1536 # must match the embedding model's output dimension (1536 for text-embedding-ada-002)
          similarity: cosine
      policy: OVERWRITE

connections: # Defines how the components are connected
  ...
  - sender: splitter.documents
    receiver: document_embedder.documents
  - sender: document_embedder.documents
    receiver: writer.documents
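A matching query pipeline embeds the query with AzureOpenAITextEmbedder, retrieves documents by embedding, and generates the answer with AzureOpenAIGenerator. The following is a hedged sketch rather than the complete example: the prompt template is abbreviated, the deployment names are placeholders, and the pipeline inputs and outputs sections are omitted:

components:
  ...
  query_embedder:
    type: haystack.components.embedders.azure_text_embedder.AzureOpenAITextEmbedder
    init_parameters:
      azure_deployment: "text-embedding-ada-002" # same deployment as the document embedder in the index

  retriever:
    type: haystack_integrations.components.retrievers.opensearch.embedding_retriever.OpenSearchEmbeddingRetriever
    init_parameters:
      document_store:
        type: haystack_integrations.document_stores.opensearch.document_store.OpenSearchDocumentStore
        init_parameters:
          embedding_dim: 1536
          similarity: cosine
      top_k: 10

  prompt_builder:
    type: haystack.components.builders.prompt_builder.PromptBuilder
    init_parameters:
      template: |-
        Answer the question based on the documents.
        Documents:
        {% for document in documents %}
        {{ document.content }}
        {% endfor %}
        Question: {{ question }}

  generator:
    type: haystack.components.generators.azure.AzureOpenAIGenerator
    init_parameters:
      azure_deployment: "gpt-4o" # the LLM deployment you created in Azure

connections:
  - sender: query_embedder.embedding
    receiver: retriever.query_embedding
  - sender: retriever.documents
    receiver: prompt_builder.documents
  - sender: prompt_builder.prompt
    receiver: generator.prompt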

The components in Pipeline Builder:

  • AzureOpenAIDocumentEmbedder (used in indexing pipelines):

    The AzureOpenAIDocumentEmbedder component card with the model provided
  • AzureOpenAITextEmbedder (used in query pipelines):

    The AzureOpenAITextEmbedder component card with the model provided
  • AzureOpenAIGenerator:

    The AzureOpenAIGenerator component card with the model provided